
    Analytical Model for Constructing Deliberative Agents

    This paper introduces a robust mathematical formalism for the definition of deliberative agents implemented using a case-based reasoning system. The concept behind deliberative agents is introduced and the case-based reasoning model is described using this analytical formalism. Variational calculus is used during the reasoning process to identify the problem solution. The agent may use variational calculus to generate plans and modify them at execution time, so that it can react to environmental changes in real time. Reflecting the continuous development of the tourism industry as it adapts to new technology, the paper includes the formalisation of an agent developed to assist potential tourists in organising their holidays and to enable them to modify their schedules on the move using wireless communication systems.
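    As a hedged illustration of the agent architecture described above, the following Python sketch shows a bare retrieve-reuse-revise-retain CBR cycle. The `Case`/`CaseBase` classes and the `adapt`/`revise` callables are illustrative stand-ins, not the paper's variational-calculus formalism.

```python
# Minimal sketch of a retrieve-reuse-revise-retain CBR cycle.
# All names (Case, CaseBase, cbr_cycle) are illustrative, not the
# paper's actual formalism.
from dataclasses import dataclass, field


@dataclass
class Case:
    problem: tuple   # e.g. (destination, days, budget)
    solution: list   # e.g. an ordered plan of activities


@dataclass
class CaseBase:
    cases: list = field(default_factory=list)

    def retrieve(self, problem, distance):
        # Nearest stored case under a user-supplied distance metric.
        return min(self.cases, key=lambda c: distance(c.problem, problem))

    def retain(self, case):
        self.cases.append(case)


def euclidean(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5


def cbr_cycle(case_base, problem, adapt, revise):
    best = case_base.retrieve(problem, euclidean)  # retrieve
    candidate = adapt(best.solution, problem)      # reuse
    solution = revise(candidate, problem)          # revise (e.g. replan when
                                                   # the environment changes)
    case_base.retain(Case(problem, solution))      # retain
    return solution
```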

    IBR retrieval method based on topology preserving mappings

    Case-based reasoning systems in general, and instance-based reasoning systems in particular, are increasingly used in industrial applications nowadays. During the last few years, researchers have been working on techniques to automate the reasoning stages identified in this methodology. This paper presents a method for automating the retrieval and indexing stages of instance-based reasoning systems. The method is based on a modification of a new type of topology-preserving map that can be used for scale-invariant classification. The scale-invariant map is an implementation of the negative feedback network that forms a topology-preserving mapping. Maximum/minimum likelihood learning is applied in this paper to the scale-invariant map and its possibilities are explored. This method automates the organization of cases and the retrieval stage of case-based reasoning systems. The proposed methodology groups instances with similar structure, identifying clusters in a data set automatically and in an unsupervised mode. The method has been successfully used to completely automate the reasoning process of an oceanographic forecasting system and to improve its performance.
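    The sketch below illustrates the general idea in Python: a small topology-preserving map with scale-invariant (cosine) winner selection is trained, the case base is indexed by winning unit, and retrieval returns the cases sharing the query's unit. It is a simplification under stated assumptions, not the paper's exact negative-feedback SIM or its maximum/minimum likelihood learning rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_sim(cases, n_units=10, eta=0.01, sigma=2.0, epochs=50):
    # Toy 1-D topology-preserving map with scale-invariant (cosine)
    # winner selection; a simplification, not the paper's exact rule.
    d = cases.shape[1]
    W = rng.normal(scale=0.1, size=(n_units, d))
    units = np.arange(n_units)
    for _ in range(epochs):
        for x in rng.permutation(cases):
            act = W @ x / (np.linalg.norm(W, axis=1) * np.linalg.norm(x) + 1e-12)
            win = int(np.argmax(act))
            h = np.exp(-((units - win) ** 2) / (2 * sigma ** 2))
            W += eta * h[:, None] * (x - W)   # neighbourhood update
    return W

def index_cases(W, cases):
    # Organize the case base: assign every case to its winning unit.
    act = cases @ W.T / (np.linalg.norm(cases, axis=1, keepdims=True)
                         * np.linalg.norm(W, axis=1) + 1e-12)
    return np.argmax(act, axis=1)

def retrieve(W, cases, assignments, query):
    # Retrieval stage: return the stored cases sharing the query's unit.
    act = W @ query / (np.linalg.norm(W, axis=1) * np.linalg.norm(query) + 1e-12)
    return cases[assignments == int(np.argmax(act))]
```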

    WeVoS scale invariant map

    A novel method for improving the training of some topology-preserving algorithms characterized by their scale-invariant mapping is presented and analyzed in this study. It is called Weighted Voting Superposition (WeVoS), and in this research it is applied to the Scale Invariant Feature Map (SIM) and the Maximum Likelihood Hebbian Learning Scale Invariant Map (Max-SIM), providing two new versions, the WeVoS-SIM and the WeVoS-Max-SIM. The method is based on training an ensemble of networks and combining them to obtain a single one that includes the best features of each network in the ensemble. To accomplish this combination, a weighted voting process takes place between the units of the maps in the ensemble in order to determine the characteristics of the units of the resulting map. To provide a complete comparative study of these new models, they are compared with their original counterparts, the SIM and Max-SIM, and also with probably the best-known topology-preserving model, the Self-Organizing Map. The models are tested on two ad hoc artificial data sets and a real-world one, all characterized by an internal radial distribution. Four different quality measures have been applied to each model in order to present a complete study of their capabilities. The results obtained confirm that the novel models presented in this study, based on the application of WeVoS, can outperform the classic models in terms of the organization of the presented information.
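    A minimal Python sketch of the voting-and-superposition idea follows; it weights each unit's vote by the number of samples it wins, which is only one of the quality measures the method admits, and it omits the final fine-tuning pass of the published algorithm.

```python
import numpy as np

def wevos_fuse(maps, data):
    # `maps`: list of trained maps over the same unit grid, each of
    # shape (n_units, dim). Each unit's vote is weighted by how many
    # samples it wins -- one simple quality measure; the published
    # method admits several alternatives.
    n_units, _ = maps[0].shape
    weights = np.zeros((n_units, len(maps)))
    for m, W in enumerate(maps):
        dists = ((data[:, None, :] - W[None, :, :]) ** 2).sum(axis=-1)
        winners = np.argmin(dists, axis=1)
        weights[:, m] = np.bincount(winners, minlength=n_units)
    row = weights.sum(axis=1, keepdims=True)
    weights = np.where(row > 0, weights / np.maximum(row, 1e-12),
                       1.0 / len(maps))   # unit never wins: uniform vote
    fused = np.zeros_like(maps[0])
    for m, W in enumerate(maps):
        fused += weights[:, [m]] * W      # weighted superposition per unit
    return fused
```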

    Intelligent Data Engineering and Automated Learning - IDEAL 2009: 10th International Conference, Burgos, Spain, September 23-26, 2009. Proceedings

    This book constitutes the refereed proceedings of the 10th International Conference on Intelligent Data Engineering and Automated Learning, IDEAL 2009, held in Burgos, Spain, in September 2009. The 100 revised full papers presented were carefully reviewed and selected from over 200 submissions for inclusion in the book. The papers are organized in topical sections on learning and information processing; data mining and information management; neuro-informatics, bio-informatics, and bio-inspired models; agents and hybrid systems; soft computing techniques in data mining; recent advances in swarm-based computing; intelligent computational techniques in medical image processing; advances in ensemble learning and information fusion; financial and business engineering (modeling and applications); MIR day 2009 - Burgos; and nature-inspired models for industrial applications.

    Connectionist Techniques for the identification and suppression of interfering underlying factors

    We consider the difficult problem of identifying independent causes from a mixture of them when these causes interfere with one another in a particular manner: those considered are visual inputs to a neural network system which are created by independent underlying causes that may occlude each other. The prototypical problem in this area is a mixture of horizontal and vertical bars in which each horizontal bar interferes with the representation of each vertical bar and vice versa. Previous researchers have developed artificial neural networks which can identify the individual causes; we seek to go further in that we create artificial neural networks which identify all the horizontal bars from such a mixture alone. This task is a necessary precursor to the development of the concept of "horizontal" or "vertical".
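    The bars mixture itself is easy to reproduce; the following Python snippet generates such training data under the usual assumptions (independent bars, binary occlusion), which may help make the interference problem concrete.

```python
import numpy as np

rng = np.random.default_rng(1)

def bars_sample(n=8, p=0.125):
    # One sample of the classic bars problem: each of the n horizontal
    # and n vertical bars appears independently with probability p, and
    # overlapping bars occlude one another (a pixel is 1 if any bar
    # covers it).
    img = np.zeros((n, n))
    img[rng.random(n) < p, :] = 1.0   # horizontal bars
    img[:, rng.random(n) < p] = 1.0   # vertical bars (occlusion/overlap)
    return img.ravel()

X = np.stack([bars_sample() for _ in range(500)])   # training mixture
```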

    Fusion Methods for Unsupervised Learning Ensembles

    The application of a “committee of experts”, or ensemble learning, to artificial neural networks that apply unsupervised learning techniques is widely considered to greatly enhance the effectiveness of such networks. This book examines the potential of the ensemble meta-algorithm by describing and testing a technique based on the combination of ensembles and statistical PCA that is able to determine the presence of outliers in high-dimensional data sets and to minimize their effects on the final results. Its central contribution concerns an algorithm for the ensemble fusion of topology-preserving maps, referred to as Weighted Voting Superposition (WeVoS), which has been devised to improve data exploration by 2-D visualization of multi-dimensional data sets. This generic algorithm is applied in combination with several other models taken from the family of topology-preserving maps, such as the SOM, ViSOM, SIM and Max-SIM. A range of quality measures for topology-preserving maps proposed in the literature are used to validate and compare WeVoS with other algorithms. The experimental results demonstrate that, in the majority of cases, the WeVoS algorithm outperforms earlier map-fusion methods and the simpler versions of the algorithm with which it is compared. All the algorithms are tested on different artificial data sets and on several of the most common machine-learning data sets in order to corroborate their theoretical properties. Moreover, a real-life case study taken from the food industry demonstrates the practical benefits of their application to more complex problems.
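    As a rough illustration of the ensembles-plus-PCA idea, the Python sketch below fits PCA on bootstrap subsamples, averages each point's reconstruction error across the ensemble, and flags extreme points; the 3-sigma threshold is an assumption for illustration, not the book's criterion.

```python
import numpy as np

def ensemble_pca_outliers(X, n_models=20, n_components=2, frac=0.8, seed=0):
    # Fit PCA on bootstrap subsamples and average each point's
    # reconstruction error across the ensemble. The 3-sigma flagging
    # rule below is an assumption, not the book's exact criterion.
    rng = np.random.default_rng(seed)
    n = len(X)
    errors = np.zeros(n)
    mean_all = X.mean(axis=0)
    for _ in range(n_models):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        S = X[idx] - X[idx].mean(axis=0)
        # principal directions from the subsample
        _, _, Vt = np.linalg.svd(S, full_matrices=False)
        V = Vt[:n_components].T
        R = (X - mean_all) @ V @ V.T + mean_all   # project + reconstruct
        errors += np.linalg.norm(X - R, axis=1)
    errors /= n_models
    return errors > errors.mean() + 3 * errors.std()
```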

    A Bio-inspired Fusion Method for Data Visualization

    This research presents a novel bio-inspired fusion algorithm based on the application of a topology-preserving map called Visualization Induced SOM (ViSOM) under the umbrella of an ensemble summarization algorithm, the Weighted Voting Superposition (WeVoS). The presented model aims to obtain more accurate and robust maps and to increase model stability by means of an ensemble training scheme and a subsequent fusion algorithm, both of which are well suited to visualization and also to classification purposes. This model may be applied alone or within the framework of hybrid intelligent systems, for instance in the retrieval phase of a case-based reasoning system. For the sake of completeness, a comparison of its performance with other topology-preserving maps and previous fusion algorithms on several public data sets obtained from the UCI repository is also included.
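    For readers unfamiliar with ViSOM, the following Python sketch shows one ViSOM-style update step (after Yin's published rule): the usual SOM move toward the input plus a regularizing term that keeps inter-unit distances in weight space proportional to grid distances, which is what makes the map directly readable as a visualization. Parameter names and values are illustrative.

```python
import numpy as np

def visom_step(W, grid, x, eta=0.1, sigma=1.5, lam=0.5):
    # One ViSOM-style update (a sketch after Yin's rule).
    # W: (n_units, dim) unit weights; grid: (n_units, 2) unit coordinates;
    # lam: resolution constant tying weight-space to grid distances.
    v = int(np.argmin(((W - x) ** 2).sum(axis=1)))     # winner
    g = np.linalg.norm(grid - grid[v], axis=1)         # grid distances
    h = np.exp(-g ** 2 / (2 * sigma ** 2))             # neighbourhood
    d = np.linalg.norm(W - W[v], axis=1)               # weight-space distances
    scale = np.where(g > 0, d / (g * lam) - 1.0, 0.0)  # resolution constraint
    W += eta * h[:, None] * ((x - W[v]) + (W[v] - W) * scale[:, None])
    return W
```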

    Fusion of Visualization Induced SOM

    In this study, ensemble techniques have been applied in the frame of topology-preserving mappings for visualization purposes. A novel extension of the ViSOM (Visualization Induced SOM) is obtained by the use of the ensemble meta-algorithm and a subsequent fusion process. The fusion algorithm has two different variants based on two different criteria for node similarity: Euclidean distance and similarity of Voronoi polygons. The goal of this upgrade is to improve the quality and robustness of the single model. Experiments performed on different data sets applying the two variants of the fusion and other simpler models are included for comparison purposes.
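    A minimal sketch of the Euclidean-distance fusion criterion is given below: each unit of one map is paired with its closest unit of another map in weight space and the pair is averaged. The Voronoi-polygon variant, which pairs units by the overlap of the samples they win, is not reproduced here.

```python
import numpy as np

def fuse_euclidean(map_a, map_b):
    # Pair every unit of map A with its closest unit of map B (in
    # weight space) and average the pair. A sketch of the Euclidean
    # criterion only; the Voronoi-similarity variant needs the
    # per-unit sets of won samples and is omitted.
    fused = np.empty_like(map_a)
    for i, w in enumerate(map_a):
        j = int(np.argmin(((map_b - w) ** 2).sum(axis=1)))
        fused[i] = 0.5 * (w + map_b[j])
    return fused
```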

    Innovations in nature inspired optimization and learning methods

    The nine papers included in this special issue represent a selection of extended contributions presented at the Third World Congress on Nature and Biologically Inspired Computing (NaBIC2011), held in Salamanca, Spain, October 19-21, 2011. Papers were selected on the basis of fundamental ideas and concepts rather than the direct usage of well-established techniques. This special issue is therefore aimed at practitioners, researchers and postgraduate students who are engaged in developing and applying advanced Nature and Biologically Inspired Computing models to solving real-world problems. The papers are organized as follows. The first paper, by Apeh et al., presents a comparative investigation of four approaches for classifying dynamic customer profiles built using evolving transactional data over time. The changing class values of the customer profiles were analyzed together with the challenging problem of deciding whether to change the class label or adapt the classifier. The results from the experiments conducted on a highly sparse and skewed real-world transactional data set show that adapting the classifiers leads to more stable classification of customer profiles in the shorter time windows, while relabelling the changed customer profile classes leads to more accurate and stable classification in the longer time windows. In the second paper, Frolov et al. suggest a new approach to Boolean factor analysis, which is an extension of their previously proposed Boolean factor analysis method, a Hopfield-like attractor neural network with increasing activity. The authors increased its applicability and robustness by complementing this method with a maximization of the learning-set likelihood function defined according to the Noisy-OR generative model. They demonstrate the efficiency of the new method using a data set generated according to the model. A successful application of the method to real data is shown in an analysis of data from the Kyoto Encyclopedia of Genes and Genomes database, which contains full genome sequencing for 1368 organisms. In the third paper, Triguero et al. analyze the integration of a wide variety of noise filters into the self-training process to distinguish the most relevant features of filters. They focus on the nearest-neighbour rule as a base classifier and ten different noise filters. They then provide an extensive analysis of the performance of these filters considering different ratios of labelled data. The results are contrasted with nonparametric statistical tests that allow the identification of relevant filters, and their main characteristics, in the field of semi-supervised learning. In the fourth paper, Gutiérrez-Avilés et al. present the TriGen algorithm, a genetic algorithm that finds triclusters of gene expression that take into account the experimental conditions and the time points simultaneously. The authors have used TriGen to mine data sets related to synthetic data, the yeast (Saccharomyces cerevisiae) cell cycle, and human inflammation and host response to injury experiments. TriGen has proved capable of extracting groups of genes with similar patterns in subsets of conditions and times, and these groups have been shown to be related in terms of their functional annotations extracted from the Gene Ontology project. In the fifth paper, Varela et al. introduce and study the application of Constrained Sampling Evolutionary Algorithms in the framework of a UAV-based search and rescue scenario.
These algorithms have been developed as a way to harness the power of Evolutionary Algorithms (EAs) when operating in complex, noisy, multimodal optimization problems, and to transfer the advantages of their approach to real-time, real-world problems that can be transformed into search and optimization challenges. These types of problems are denoted Constrained Sampling problems and are characterized by the fact that the physical limitations of reality do not allow for an instantaneous determination of the fitness of the points present in the population that must be evolved. A general approach to address these problems is presented, and a particular implementation using Differential Evolution as an example of a CS-EA is created and evaluated using teams of UAVs in search and rescue missions. The results are compared to those of a Swarm Intelligence based strategy on the same type of problem, as this approach has been widely used within the UAV path-planning field in different variants by many authors. In the sixth paper, Zhao et al. introduce human intelligence into computational intelligence algorithms, namely particle swarm optimization (PSO) and immune algorithms (IA). A novel human-computer cooperative PSO-based immune algorithm (HCPSO-IA) is proposed, in which the initial population consists of initial artificial individuals supplied by humans, while the initial algorithm individuals are generated by a chaotic strategy. Some new artificial individuals are introduced to replace the inferior individuals of the population. HCPSO-IA benefits by giving free rein to the talents of designers and computers, and contributes to solving complex layout design problems. The experimental results illustrate that the proposed algorithm is feasible and effective. In the seventh paper, Rebollo-Ruiz and Graña give an extensive empirical evaluation of the innovative nature-inspired Gravitational Swarm Intelligence (GSI) algorithm solving the Graph Coloring Problem (GCP). GSI follows the Swarm Intelligence problem-solving approach, where the spatial positions of agents are interpreted as problem solutions and agent motion is determined solely by local information, avoiding any central control system. To apply GSI to search for solutions of the GCP, the authors map agents to the graph's nodes. Agents move as particles in the gravitational field defined by goal objects corresponding to colors. When an agent falls into the gravitational well of a color goal, its corresponding node is colored by this color. The graph's connectivity is mapped into a repulsive force between agents corresponding to adjacent nodes. The authors discuss the convergence of the algorithm by testing it over an extensive suite of well-known benchmarking graphs. Comparison of this approach to state-of-the-art approaches in the literature shows improvements on many of the benchmark graphs. In the eighth paper, Macaš et al. demonstrate how novel algorithms can be derived from opinion formation models and empirically demonstrate their usability in the area of binary optimization. In particular, the paper introduces a general SITO algorithmic framework and describes four algorithms based on this general framework. Recent applications of these algorithms to pattern recognition in electronic noses, electronic tongues, newborn EEG and ICU patient mortality prediction are discussed. Finally, an open-source SITO library for MATLAB and Java is introduced. In the final paper, Madureira et al. present a negotiation mechanism for dynamic scheduling based on social and collective intelligence. Under the proposed negotiation mechanism, agents must interact and collaborate in order to improve the global schedule. Swarm Intelligence is considered a general aggregation term for several computational techniques which use ideas and draw inspiration from the social behaviors of insects and other biological systems. This work is concerned with negotiation, whereby multiple self-interested agents can reach agreement over the exchange of operations on competitive resources. A computational study was performed in order to validate the influence of the negotiation mechanism on system performance and on the SI technique used. The results provide statistical evidence that the negotiation mechanism significantly influences overall system performance, and that the Artificial Bee Colony algorithm has an advantage in the effectiveness of makespan minimization and machine occupation maximization. We would like to thank our peer reviewers for their diligent work and efficient efforts. We are also grateful to the Editor-in-Chief of Neurocomputing, Prof. Tom Heskes, for his continued support for the NaBIC conference and for the opportunity to organize this special issue.
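    Of the algorithms surveyed above, the Differential Evolution base loop used by Varela et al. is compact enough to sketch; the following minimal DE/rand/1/bin implementation is illustrative only and omits the Constrained Sampling machinery and the UAV-specific fitness evaluation.

```python
import numpy as np

def differential_evolution(fitness, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=100, seed=0):
    # Minimal DE/rand/1/bin loop (minimization). `bounds` is a pair of
    # arrays (lo, hi) defining the search box.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([fitness(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(
                [j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # rand/1 mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # keep >=1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            f = fitness(trial)
            if f < fit[i]:                              # greedy selection
                pop[i], fit[i] = trial, f
    return pop[int(np.argmin(fit))], fit.min()

# usage: best, val = differential_evolution(lambda x: (x ** 2).sum(),
#                                           (np.full(5, -5.0), np.full(5, 5.0)))
```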

    A three-step unsupervised neural model for visualizing high complex dimensional spectroscopic data sets

    The interdisciplinary research presented in this study is based on a novel approach to clustering tasks and the visualization of the internal structure of high-dimensional data sets. Following normalization, a pre-processing step performs dimensionality reduction on a high-dimensional data set, using an unsupervised neural architecture known as cooperative maximum likelihood Hebbian learning (CMLHL), which is characterized by its capability to preserve a degree of global ordering in the data. Subsequently, the self-organising map (SOM) is applied as a topology-preserving architecture used for two-dimensional visualization of the internal structure of such data sets. This research studies the joint performance of these two neural models and their capability to preserve some global ordering. Their effectiveness is demonstrated through a case study on a real-life, highly complex spectroscopic data set characterized by its lack of reproducibility. The data under analysis are taken from an X-ray spectroscopic analysis of a rose window in a famous ancient Gothic Spanish cathedral. The main aim of this study is to classify each sample by its date and place of origin, so as to facilitate the restoration of these and other historical stained glass windows. Thus, having ascertained each sample's chemical composition and degree of conservation, this technique contributes to identifying the different areas and periods in which the stained glass panels were produced. The combined method proposed in this study is compared with a classical statistical model that uses principal component analysis (PCA) as a pre-processing step, and with some other unsupervised models such as maximum likelihood Hebbian learning (MLHL) and the application of the SOM without a pre-processing step. In the last case, a comparison of the convergence processes was performed to examine the efficacy of the combined CMLHL/SOM model.
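    The three-step pipeline lends itself to a compact sketch. The Python below normalizes the data, applies a simplified maximum-likelihood Hebbian projection as a stand-in for CMLHL (the cooperative lateral connections between output neurons are omitted), and trains a small SOM on the result; the data, sizes and learning rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlhl_project(X, n_out=3, eta=1e-3, p=2.0, epochs=100):
    # Simplified MLHL projection, a stand-in for the CMLHL step:
    # y = W x;  e = x - W^T y;  dW = eta * outer(y, sign(e)|e|^(p-1)).
    d = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_out, d))
    for _ in range(epochs):
        for x in rng.permutation(X):
            y = W @ x
            e = x - W.T @ y
            W += eta * np.outer(y, np.sign(e) * np.abs(e) ** (p - 1))
    return X @ W.T

def som_fit(Y, shape=(8, 8), eta=0.5, sigma=2.0, epochs=50):
    # Tiny SOM for 2-D visualization of the reduced data.
    gx, gy = np.meshgrid(range(shape[0]), range(shape[1]))
    grid = np.c_[gx.ravel(), gy.ravel()].astype(float)
    W = rng.normal(scale=0.1, size=(len(grid), Y.shape[1]))
    for t in range(epochs):
        s = sigma * (1 - t / epochs) + 0.1    # shrinking neighbourhood
        for y in rng.permutation(Y):
            win = int(np.argmin(((W - y) ** 2).sum(axis=1)))
            h = np.exp(-np.linalg.norm(grid - grid[win], axis=1) ** 2
                       / (2 * s ** 2))
            W += eta * h[:, None] * (y - W)
    return W, grid

# three steps: normalize -> CMLHL-like reduction -> SOM visualization
X = rng.normal(size=(200, 20))                # stand-in for spectra
Xn = (X - X.mean(0)) / (X.std(0) + 1e-12)     # step 1: normalization
Y = mlhl_project(Xn)                          # step 2: dimensionality reduction
W, grid = som_fit(Y)                          # step 3: topology-preserving map
```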